Submodular Meta-Learning

Neural Information Processing Systems

In this paper, we introduce a discrete variant of the meta-learning framework. Meta-learning aims to exploit prior experience and data to improve performance on future tasks. By now, numerous formulations for meta-learning exist in the continuous domain. Notably, the Model-Agnostic Meta-Learning (MAML) formulation views each task as a continuous optimization problem and, based on prior data, learns a suitable initialization that can be adapted to new, unseen tasks with a few simple gradient updates. Motivated by this formulation, we propose a novel meta-learning framework in the discrete domain, where each task is equivalent to maximizing a set function under a cardinality constraint.
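To make the discrete per-task problem concrete, here is a minimal sketch of greedy maximization of a monotone submodular set function under a cardinality constraint, the problem the abstract describes each task as solving. The coverage objective and element names below are illustrative assumptions, not taken from the paper.

```python
def greedy_max(f, ground_set, k):
    """Classic greedy for maximizing a monotone submodular set function f
    under the cardinality constraint |S| <= k: repeatedly add the element
    with the largest marginal gain f(S + e) - f(S)."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy coverage function (monotone and submodular): f(S) is the number of
# ground elements covered by the chosen sets. Purely illustrative data.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_max(f, sets.keys(), 2))  # e.g. a 2-element solution covering 3 items
```

For monotone submodular objectives this greedy rule is the standard baseline, guaranteeing a (1 - 1/e)-approximation under a cardinality constraint.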


Review for NeurIPS paper: Submodular Meta-Learning

Neural Information Processing Systems

Summary and Contributions: In this paper, the authors extend the meta-learning framework to the discrete setting. Specifically, they consider tasks whose objectives are monotone submodular set functions to be maximized under a cardinality constraint. They present two greedy-based algorithms and show that each is at least 1/2-optimal. They also present a meta-greedy algorithm, which chooses the better solution between the previous two algorithms, and prove that it is at least 0.53-optimal, along with a randomized variant of the meta-greedy algorithm. Finally, the paper reports results from applying the above algorithms to two practical problems: ride-sharing and movie recommendation.
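The meta-greedy step the review describes, keeping the better of an adapted solution and a from-scratch solution, can be sketched as below. The `greedy_extend` helper, the coverage objective, and all names are hypothetical simplifications for illustration, not the paper's exact algorithms.

```python
def greedy_extend(f, ground_set, start, k):
    """Greedily grow a partial solution `start` to size k by repeatedly
    adding the element with the largest marginal gain under f."""
    S = set(start)
    while len(S) < k:
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

def meta_greedy(f, ground_set, meta_solution, k):
    """Hypothetical sketch of the meta-greedy idea: adapt the learned
    meta-solution, also solve the task from scratch, and keep whichever
    candidate scores higher under the task objective f."""
    adapted = greedy_extend(f, ground_set, meta_solution, k)
    scratch = greedy_extend(f, ground_set, set(), k)
    return max(adapted, scratch, key=f)
```

The point of the selection step is robustness: when the learned meta-solution suits the new task it gives the adapted solution a head start, and when it does not, the from-scratch solution acts as a fallback.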


Review for NeurIPS paper: Submodular Meta-Learning

Neural Information Processing Systems

Three reviewers (two of whom are knowledgeable) are in favor of acceptance. R2 is not convinced of the strength of the contribution and suggests that a more thorough "Broader Impact" analysis of the approach is needed; on the other hand, R2 does not strongly object to this paper being accepted.

